161 research outputs found

    A systematic approach to atomicity decomposition in Event-B

    Event-B is a state-based formal method that supports a refinement process in which an abstract model is elaborated towards an implementation in a step-wise manner. One weakness of Event-B is that control flow between events is typically modelled implicitly via variables and event guards. While this fits well with Event-B refinement, it can make models involving sequencing of events more difficult to specify and understand than if control flow were explicitly specified. New events may be introduced in Event-B refinement, and these are often used to decompose the atomicity of an abstract event into a series of steps. A second weakness of Event-B is that there is no explicit link between such new events, which represent steps in the decomposition of atomicity, and the abstract event to which they contribute. To address these weaknesses, atomicity decomposition diagrams support the explicit modelling of control flow and of refinement relationships for new events. In previous work, the atomicity decomposition approach was evaluated manually in the development of two large case studies, a multimedia protocol and a spacecraft sub-system. The evaluation results helped us to develop a systematic definition of the atomicity decomposition approach and to build a tool supporting it. In this paper we outline this systematic definition of the approach, describe the tool that supports it, and evaluate the contribution that the tool makes.
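    Event-B models are not programs, so the sketch below is only a loose Python illustration of the weakness described above: the ordering of the refined events (a hypothetical booking scenario, not taken from the paper) is enforced implicitly through a state variable and per-event guards, which is exactly the kind of control flow that atomicity decomposition diagrams make explicit.

```python
# Illustrative sketch only; the event names and booking scenario are invented.
# Sequencing between the refined events is encoded implicitly in the "phase"
# variable and the guard checked at the start of each event.

class BookingModel:
    def __init__(self):
        self.phase = "start"   # control flow hidden in a variable
        self.seat = None

    # Three refined events that together decompose one abstract atomic event.
    def reserve_seat(self):
        assert self.phase == "start"      # guard
        self.seat = 42
        self.phase = "reserved"           # enables the next event

    def take_payment(self):
        assert self.phase == "reserved"   # guard: must follow reserve_seat
        self.phase = "paid"

    def issue_ticket(self):
        assert self.phase == "paid"       # guard: must follow take_payment
        self.phase = "done"

m = BookingModel()
m.reserve_seat()
m.take_payment()
m.issue_ticket()
print(m.phase)   # -> "done": the abstract event has now "happened"
```

    An atomicity decomposition diagram would instead show the three steps, their ordering, and their link to the abstract event explicitly, rather than leaving that information buried in the phase variable and the guards.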

    Using lightweight modeling to understand chord


    AToM3: A Tool for Multi-formalism and Meta-modelling

    The final publication is available at Springer via http://dx.doi.org/10.1007/3-540-45923-5_12. Proceedings of the 5th International Conference, FASE 2002, held as part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2002, Grenoble, France, April 8–12, 2002. This article introduces the combined use of multi-formalism modelling and meta-modelling to facilitate computer-assisted modelling of complex systems. The approach allows one to model different parts of a system using different formalisms. Models can be automatically converted between formalisms thanks to information found in a Formalism Transformation Graph (FTG), proposed by the authors. To aid in the automatic generation of multi-formalism modelling tools, formalisms are modelled in their own right (at a meta-level) within an appropriate formalism. This has been implemented in the interactive tool AToM3. This tool is used to describe formalisms commonly used in the simulation of dynamical systems, as well as to generate custom tools to process (create, edit, transform, simulate, optimise, ...) models expressed in the corresponding formalism. AToM3 relies on graph rewriting techniques and graph grammars to perform the transformations between formalisms as well as other tasks, such as code generation and operational semantics specification. This paper has been partially sponsored by the Spanish Interdepartmental Commission of Science and Technology (CICYT), project number TEL1999-0181. Prof. Vangheluwe gratefully acknowledges partial support for this work by a Natural Sciences and Engineering Research Council of Canada (NSERC) Individual Research Grant.
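    The Formalism Transformation Graph is described only abstractly above, so the fragment below is a minimal sketch, under my own assumptions, of the underlying idea: formalisms as nodes, available model transformations as directed edges, and conversion between formalisms as path finding. The formalism names and edges are illustrative, not AToM3's actual data structures.

```python
# Minimal sketch of the idea behind a Formalism Transformation Graph (FTG):
# formalisms are nodes, automatic transformations are directed edges, and a
# model is converted between formalisms by following a path of transformations.
# The edges below are invented for illustration, not AToM3's real FTG.
from collections import deque

FTG = {
    "StateCharts": ["DEVS"],
    "CausalBlockDiagram": ["DEVS"],
    "DEVS": ["EventScheduling"],
    "EventScheduling": [],
}

def conversion_path(source, target):
    """Breadth-first search for a chain of transformations."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in FTG.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(conversion_path("StateCharts", "EventScheduling"))
# -> ['StateCharts', 'DEVS', 'EventScheduling']
```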

    Optimizing Feature Interaction Detection

    © 2017, Springer International Publishing AG. The feature interaction problem has been recognized as a general problem of software engineering. The problem appears when a combination of features interacts, generating a conflict and exhibiting behaviour that is unexpected given the features considered in isolation, possibly resulting in a critical safety violation. Verification of the absence of critical feature interactions has been the subject of several studies. In this paper we focus on functional interactions and address the problem of 3-way feature interactions, i.e. interactions that occur only when three features are all included in the system, but not when only two of them are. In this setting, we define a widely applicable framework within which we show that a 3 (or greater)-way interaction is always caused by a 2-way interaction, i.e. that pairwise sampling is complete, hence reducing the complexity of automatically detecting incorrect interactions to quadratic.
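    As a toy illustration of the pairwise-detection setting (not the paper's formal framework), the sketch below invents three telephony-style features and flags a pair as potentially interacting when composing them in different orders yields different system states; the feature names and the simplified interaction criterion are assumptions made for the example.

```python
# Toy illustration of pairwise feature-interaction detection; the features
# and the order-dependence criterion are invented, deliberately simplified
# stand-ins for "unexpected combined behaviour".
from itertools import combinations

def call_forwarding(state):   # hypothetical feature
    return {**state, "route": "forwarded"}

def do_not_disturb(state):    # hypothetical feature
    return {**state, "route": "blocked"}

def voicemail(state):         # hypothetical feature
    return {**state, "record": True}

FEATURES = [call_forwarding, do_not_disturb, voicemail]
BASE = {"route": "direct", "record": False}

def interact(f, g):
    """Order dependence as a crude proxy for a 2-way interaction."""
    return f(g(dict(BASE))) != g(f(dict(BASE)))

# Pairwise sampling: every pair of features is checked exactly once.
for f, g in combinations(FEATURES, 2):
    if interact(f, g):
        print(f"possible interaction: {f.__name__} / {g.__name__}")
# -> flags call_forwarding / do_not_disturb, which both rewrite "route"
```

    The paper's completeness result says that, within its framework, checking pairs in this quadratic fashion is enough: any 3-way (or higher) interaction manifests as some 2-way interaction.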

    A Rigorous Correctness Proof for Pastry

    Peer-to-peer protocols for maintaining distributed hash tables, such as Pastry or Chord, have become popular for a class of Internet applications. While such protocols promise certain properties concerning correctness and performance, verification attempts using formal methods invariably discover border cases that violate some of those guarantees. Tianxiang Lu reported correctness problems in published versions of Pastry and also developed a model, which he called LuPastry, for which he provided a partial proof of correct delivery assuming no node departures, mechanized in the TLA+ Proof System. Lu's proof is based on certain assumptions that were left unproven, and we found counterexamples to several of them. In this paper, we present a revised model and a rigorous proof of correct delivery, which we call LuPastry+. Aside from being the first complete proof, LuPastry+ also improves upon Lu's work by reformulating parts of the specification in such a way that the reasoning complexity is confined to a small part of the proof.
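    LuPastry+ is a TLA+ specification and proof, so the Python sketch below is only a toy restatement, under simplifying assumptions, of the correct-delivery property being proved: on a ring of identifiers with no departures, a lookup must terminate at a live node as close to the key as any other. The small identifier space, the node set, and the greedy neighbour-only forwarding rule are all invented for the example; real Pastry routes via prefix-matching routing tables and leaf sets.

```python
# Toy ring illustrating the "correct delivery" property, not Pastry itself.
RING = 2 ** 8                          # small identifier space
NODES = sorted([3, 45, 90, 130, 201])  # static node set: no joins or departures

def distance(a, b):
    d = abs(a - b)
    return min(d, RING - d)            # distance around the ring

def closest_node(key):
    return min(NODES, key=lambda n: distance(n, key))

def neighbours(node):
    """Each node knows only its ring predecessor and successor
    (a tiny stand-in for Pastry's leaf set)."""
    i = NODES.index(node)
    return [NODES[(i - 1) % len(NODES)], NODES[(i + 1) % len(NODES)]]

def route(start, key):
    """Greedy forwarding: hop to a neighbour strictly closer to the key."""
    current, hops = start, [start]
    while True:
        best = min(neighbours(current), key=lambda n: distance(n, key))
        if distance(best, key) < distance(current, key):
            current = best
            hops.append(current)
        else:
            return hops

# Correct delivery: every lookup ends at a node as close to the key as the
# closest live node (checked exhaustively over this toy space).
assert all(distance(route(s, k)[-1], k) == distance(closest_node(k), k)
           for s in NODES for k in range(RING))
print(route(3, 128))   # -> [3, 201, 130]
```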

    A Study of the Learnability of Relational Properties: Model Counting Meets Machine Learning (MCML)

    This paper introduces the MCML approach for empirically studying the learnability of relational properties that can be expressed in the well-known software design language Alloy. A key novelty of MCML is quantification of the performance of, and semantic differences among, trained machine learning (ML) models, specifically decision trees, with respect to entire (bounded) input spaces, and not just for given training and test datasets (as is the common practice). MCML reduces the quantification problems to the classic complexity-theory problem of model counting, and employs state-of-the-art model counters. The results show that relatively simple ML models can achieve surprisingly high performance (accuracy and F1-score) when evaluated in the common setting of using training and test datasets - even when the training dataset is much smaller than the test dataset - indicating the seeming simplicity of learning relational properties. However, MCML metrics based on model counting show that the performance can degrade substantially when tested against the entire (bounded) input space, indicating the high complexity of precisely learning these properties and the usefulness of model counting in quantifying true performance.
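    The following sketch is a much-simplified, assumed illustration of the gap MCML quantifies, not the MCML tool chain itself: it trains a small decision tree on a toy relational property (reflexivity of a binary relation over a 3-element domain, my own stand-in for an Alloy property) and compares test-sample accuracy with accuracy over the entire bounded input space, which is small enough here to enumerate directly instead of invoking a model counter.

```python
# Toy stand-in for the MCML comparison: sample-based metrics vs. metrics
# computed over the whole bounded input space. Requires scikit-learn.
import random
from itertools import product
from sklearn.tree import DecisionTreeClassifier

N = 3
SPACE = list(product([0, 1], repeat=N * N))   # all 512 relations over N elements

def is_reflexive(bits):
    # bit i*N + i says whether the pair (i, i) is in the relation
    return all(bits[i * N + i] == 1 for i in range(N))

labels = [is_reflexive(r) for r in SPACE]

random.seed(0)
train_idx = random.sample(range(len(SPACE)), 64)
test_idx = random.sample([i for i in range(len(SPACE)) if i not in set(train_idx)], 64)

clf = DecisionTreeClassifier(max_depth=2)     # deliberately too shallow to be exact
clf.fit([SPACE[i] for i in train_idx], [labels[i] for i in train_idx])

sample_acc = sum(clf.predict([SPACE[i]])[0] == labels[i] for i in test_idx) / len(test_idx)
exact_acc = sum(p == l for p, l in zip(clf.predict(SPACE), labels)) / len(SPACE)
print(f"test-sample accuracy: {sample_acc:.2f}  whole-space accuracy: {exact_acc:.2f}")
```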

    Towards a theory of conceptual design for software

    Concepts are the building blocks of software systems. They are not just subjective mental constructs, but are objective features of a system's design: increments of functionality that were consciously introduced by a designer to serve particular purposes. This essay argues for viewing the design of software in terms of concepts, with their invention (or adoption) and refinement as the central activity of software design. A family of products can be characterized by arranging concepts in a dependence graph from which coherent concept subsets can be extracted. Just as bugs can be found in the code of a function prior to testing by reviewing the programmer's argument for its correctness, so flaws can be found in a software design by reviewing an argument by the designer. This argument consists of providing, for each concept, a single compelling purpose, and demonstrating how the concept fulfills the purpose with an archetypal scenario called an 'operational principle'. Some simple conditions (primarily in the relationship between concepts and their purposes) can then be applied to reveal flaws in the conceptual design. SUTD-MIT International Design Centre (IDC).
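    The dependence graph and "coherent concept subsets" mentioned above lend themselves to a small sketch; the concepts and dependences below are invented (loosely email-flavoured) examples, not taken from the essay, and the closure rule is one plausible operational reading of "coherent".

```python
# Sketch: concepts as nodes, "cannot sensibly exist without" as edges.
# A coherent subset (a candidate smaller product) is closed under dependence.

DEPENDS_ON = {
    "label":   {"message"},
    "thread":  {"message"},
    "search":  {"message"},
    "message": set(),
}

def is_coherent(subset):
    """Every dependence of every chosen concept stays inside the subset."""
    return all(DEPENDS_ON[c] <= set(subset) for c in subset)

def closure(concepts):
    """Smallest coherent subset containing the given concepts."""
    result = set(concepts)
    while True:
        extra = set().union(*(DEPENDS_ON[c] for c in result)) - result
        if not extra:
            return result
        result |= extra

print(is_coherent({"label"}))          # False: "label" needs "message"
print(closure({"thread", "search"}))   # {'thread', 'search', 'message'}
```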

    Dietary exposures and allergy prevention in high-risk infants

    Allergic conditions in children are a prevalent health concern in Canada. The burden of disease and the societal costs of proper diagnosis and management are considerable, making the primary prevention of allergic conditions a desirable health care objective. This position statement reviews current evidence on dietary exposures and allergy prevention in infants at high risk for developing allergic conditions. It revisits previous dietary recommendations for pregnancy, breastfeeding and formula feeding, and provides an approach for introducing solid foods to high-risk infants. While there is no evidence that delaying the introduction of any specific food beyond six months of age helps to prevent allergy, the protective effect of early introduction of potentially allergenic foods (at four to six months of age) remains under investigation. Recent research appears to suggest that regularly ingesting a new, potentially allergenic food may be as important as when that food is first introduced. © Canadian Paediatric Society 2013

    Revisiting Goal-Oriented Requirements Engineering with a Regulation View

    Goal-Oriented Requirements Engineering (GORE) is considered to be one of the main achievements that the Requirements Engineering field has produced since its inception, and several GORE methods have been designed over the last twenty years in both research and industry. In analyzing individual and organizational behavior, goals appear as a natural element. There are, however, other organizational models that may better explain human behavior, albeit at the expense of more complex models. We present one such alternative model, which explains individual and organizational survival through continuous regulation. We give our view of the changes needed in GORE methods to support this alternative view through the use of maintenance goals and beliefs. We illustrate our discussion with the real example of a family practitioner association that needed a new information system.
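    As a minimal, assumed sketch (the goal texts are invented, not taken from the family-practitioner case), the fragment below shows the distinction the authors lean on: achievement goals are discharged once satisfied, whereas maintenance goals must keep holding and are monitored against beliefs about the environment.

```python
# Minimal sketch of achievement vs. maintenance goals; names are illustrative.
from dataclasses import dataclass

@dataclass
class AchievementGoal:
    description: str            # satisfied once, then discharged

@dataclass
class MaintenanceGoal:
    description: str            # must continue to hold over time
    monitored_belief: str       # belief whose violation triggers regulation

goals = [
    AchievementGoal("Deploy the new member-management system"),
    MaintenanceGoal("Keep membership records accurate",
                    monitored_belief="records reflect practitioners' current status"),
]
for g in goals:
    print(type(g).__name__, "-", g.description)
```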

    Requirements Engineering

    Requirements Engineering (RE) aims to ensure that systems meet the needs of their stakeholders, including users, sponsors, and customers. Often considered one of the earliest activities in software engineering, it has developed into a set of activities that touch almost every step of the software development process. In this chapter, we reflect on how the need for RE was first recognised and how its foundational concepts were developed. We present the seminal papers on four main activities of the RE process, namely (i) elicitation, (ii) modelling & analysis, (iii) assurance, and (iv) management & evolution. We also discuss some current research challenges in the area, including security requirements engineering as well as RE for mobile and ubiquitous computing. Finally, we identify some open challenges and research gaps that require further exploration.